How much mind do machines have?
.cb The Little Minds of Thinking Machines
Ever since Descartes, philosophically minded people have
wrestled with the question of whether it is possible for machines
to think. People today interact more and more with computers -
both personal computers and the time-shared computers to which
the telephone system lets us connect our own machines or terminals.
This makes the question of whether machines can think, and what
kinds of thoughts they can have, ever more pertinent.
More specifically, we can ask whether machines remember, believe,
know, intend, like, dislike, want, understand, promise, owe,
have rights or duties, or deserve reward or punishment.
Are these all-or-nothing questions, or can we say that machines
understand some things and not others, or think to some extent?
My answer, based on some 30 years of work in the field of
artificial intelligence, is that some machines have some of these
intellectual qualities, to differing extents. Even some very simple
machines can usefully be regarded as having some intellectual qualities.
They can and will be given more
and more intellectual qualities; not even human intelligence is
a limit. However, artificial
intelligence is a difficult branch of science and engineering, and,
judging by present slow progress, it might take a long time.
Genetics took the 100 years between Mendel's experiments with peas and
the cracking of the DNA code for proteins, and it isn't done yet.
Present machines have almost no emotional qualities, and,
in my opinion, it would be a bad idea to give them any. We have
enough trouble figuring out our duties to our fellow humans and
to animals without creating a bunch of robots with qualities
that would allow anyone to feel sorry for them or would allow
them to feel sorry for themselves.
This article contains some arguments for this general point of
view, but mainly I'm interested in specifics. What mental
qualities do various present machines have, and what would be involved
in giving them specific qualities that they don't have?
More complex thoughts
It is easiest to understand the ascription of thoughts
to machines in circumstances where we also understand the machine
in physical terms. However, the payoff comes when no one,
or only an expert, understands the machine physically.
Here is a futuristic example:
Computers belonging to different companies transact business
with one another. Suppose company A has designed a complex system
called a Floogly and doesn't sell very many of them. A potential customer asks for
a price quotation and a delivery schedule. The salesman asks his
computer. The "price quotation" program looks up the information
on Floogly and determines what parts are needed to put it together.
It then calls the "purchasing department" program, which has the latest
electronic catalogs showing who
makes what. The catalogs contain
"advertisements". The "purchasing department" program telephones
programs in the computers belonging to other companies and inquires
about the price and delivery times for parts. After "purchasing department"
has determined the best buys, it tells "price quotation", which
tells the salesman. Of course, if the parts are non-standard products,
the suppliers may have to inquire of their own suppliers. Here is
a possible dialog, with the part names, quantities, and prices invented
for illustration:
Company A purchasing:
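What is your price and delivery time on 1000 type X7 connectors?
Company B sales: $1.50 each in lots of 1000, delivery in 30 days.
Company A purchasing: We need delivery in 20 days or sooner.
Company B sales: Delivery in 20 days is possible at $1.75 each.
Company A purchasing: Accepted, subject to confirmation by our salesman.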
Suppose someone says, "The dog wants to go out". He has ascribed
the mental quality of wanting to the dog without claiming that the dog
thinks like a human and can formulate the thought, "I want
to go out".
The statement isn't shorthand for something the dog did, because there
are many ways of knowing that a dog wants to go out. It also isn't
shorthand for a specific prediction of what the dog is likely to
do next. Nor do we know enough about the physiology of dogs for
it to be an abbreviation for some statement about the dog's nervous
system. It is useful because of its connection with all of these
things and because what it says about the dog corresponds in
an informative way with similar statements about people. It doesn't
commit the person who said it to an elaborate view of the mind
of a dog. For example, it doesn't commit a person to any position
about whether the dog has the mental machinery to know that it is
a dog or even to know that it wants to go out.
Our approach to the mental qualities of machines is similar,
except that with machines we have a big advantage. We can study the
ascription of mental qualities to machines that we already
understand as physical systems. We begin with a simple thermostat
controlling an electric blanket.
The instructions that came with one such
electric blanket said, %2"Place the control near the bed in a place that
is neither hotter nor colder than the room itself. If the control
is placed on a radiator or radiant heated floors, it will "think"
the entire room is hot and will lower your blanket temperature, making
your bed too cold. If the control is placed on the window sill
in a cold draft, it will "think" the entire room is cold and will
heat up your bed so it will be too hot."%1
We think that the blanket manufacturer was correct in his
use of "think", and it helped his customers get the best use out
of the blanket. It ascribes at most three thoughts to the blanket
control, namely "it's too hot", "it's too cold" and "it's ok".
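In program terms the control's whole mental life fits in a few lines.
Here is a sketch in a modern programming notation; the one-degree
deadband is invented for illustration:

    def blanket_control(sensed_temperature, dial_setting):
        # The control has at most three "thoughts" about the room.
        if sensed_temperature > dial_setting + 1.0:
            return "it's too hot"     # so lower the blanket temperature
        elif sensed_temperature < dial_setting - 1.0:
            return "it's too cold"    # so raise the blanket temperature
        else:
            return "it's ok"          # leave the setting alone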
Of course, we don't need to ascribe thoughts in order to
understand the thermostat, so we might hope to lessen controversy by
beginning with systems which require ascribing
thoughts in order to understand them.
Refusing to include the easy
cases of mental qualities is like starting the number system with
two on the grounds that you don't need zero, because if you
don't have anything you can't count it, and if you only have
one thing you don't need to count it. Endless arguments could
be made along these lines, but
mathematicians have found that the number
system as a logical system is best understood with zero and one included.
Boxes:
We will ascribe beliefs to machines for the same reason we
ascribe them to other people. It helps us understand their behavior.
Machines have and will have varied little minds. Long before
we can make machines with human capability, we will have many machines
that cannot be understood except in mental terms.
It wouldn't be a good idea to make machines that could reasonably
be ascribed rights or duties. Who wants a bunch of whining robots
that have to be jailed when they behave badly?
No more mental qualities should be ascribed to a particular
machine than its structure and behavior warrant. The thermostat
can be considered to believe the room is too cold, but there is
no basis for considering it to believe that it is a thermostat
or have any other belief about itself.
We also need to be careful about the minds of animals. Recent
research indicates that an ape can learn that what it sees in a mirror
is itself and use the mirror to find a spot on its forehead which it
rubs off. Monkeys can't do this, and some psychologists infer that
this is because a monkey doesn't have the concept of itself.
Here's a somewhat fanciful example:
In ten or twenty years Minneapolis-Honeywell, which makes
many thermostats today, may try to sell you a really fancy home temperature
control system. It will know the temperature and humidity
preferences of each member of the family and can detect who is in the
room. When several are in the room, it makes what it considers a
compromise adjustment, taking into account who has most recently had
to suffer a room climate different from what he
prefers. Honeywell has discovered that these compromises should be
modified according to a social rank formula devised by its psychologists
and determined from patterns of speech loudness. The brochure
describing how the beast works is rather lengthy, and the real dope
is in a technical appendix in small print.
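One can imagine the small print sketching a rule along these lines;
the weighting scheme is invented for illustration and is certainly
not Honeywell's:

    def compromise_setting(occupants):
        # occupants: (preferred_temperature, social_rank, minutes_suffered)
        # triples for the people now in the room. Higher rank counts
        # for more, and so does recent suffering.
        weighted_sum = 0.0
        total_weight = 0.0
        for preferred, rank, suffered in occupants:
            weight = rank * (1.0 + suffered / 60.0)
            weighted_sum += weight * preferred
            total_weight += weight
        return weighted_sum / total_weight

    # Grandpa (rank 2) against one other occupant who has suffered lately.
    print(compromise_setting([(74.0, 2.0, 0.0), (68.0, 1.0, 30.0)]))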
Now imagine that I went on about this thermostat until you
were bored and skipped the rest of the paragraph. Confronted
with an uncomfortable room, you might form any of the following
hypotheses, depending on what other information you had.
1. It's trying to do the right thing, but it can't because the valve
is stuck. But then it should complain.
2. It regards grandpa as more important than me, and it is keeping
the room hot in case he comes in.
3. It confuses me with grandpa.
4. It has forgotten how I like it.
Artificial intelligence
Artificial intelligence (AI) is the science and engineering
of making computers behave in ways that, in people, would be
called intelligent.
AI research usually involves programming
a computer to use specific concepts and to have specific mental
qualities. Each step is difficult, and different programs have
different mental qualities. Some programs acquire information from
people or other programs and plan actions for people that
involve what other people do. Such programs must ascribe beliefs,
knowledge and goals to other programs and people. Thinking about
when they should do so led to the considerations of this article.
Current AI approaches to ascribing specific mental qualities
use the symbolism of first order logic. Speaking technically,
a suitable collection of functions and predicates must be given
in that symbolism.
Certain formulas of this
logic are then axioms giving relations between the concepts and
conditions for ascribing them. These axioms are used by reasoning
programs as part of the process whereby the program decides what to
do. The formalisms require too much explanation to be included in
this article, but some of the criteria are easily given in English.
Beliefs and goals are ascribed in accordance with a
principle of rationality: our object is to account
for as much behavior as possible by saying that the machine, person,
or animal does what it thinks will achieve its goals. It is especially
important to have what is called in AI an %2epistemologically adequate%1
system. Namely, we should be able to express the information our
program can actually get - not what we might be able to get if we
knew more about the neurophysiology of a human or the design of a
specific machine.
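As an illustration, the thermostat's one belief and the principle of
rationality itself might be written as axioms; the predicate names
here are invented, not a standard formalism:

    \forall t.\; \mathit{temp}(t) < \mathit{setting} \rightarrow
        \mathit{believes}(\mathit{thermostat},\, \mathit{TooCold},\, t)

    \forall a\, g\, x.\; \mathit{wants}(a, g) \wedge
        \mathit{believes}(a, \mathit{achieves}(x, g)) \rightarrow
        \mathit{attempts}(a, x)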
In general we cannot give definitions, because the concepts
form a system that we fit as a whole to phenomena. Similarly the
physicist doesn't give a definition of electron or quark. Electron
and quark are terms in a complicated theory that predicts
the results of experiments.
Indeed common sense psychology works in the same way. A
child learns to ascribe wants and beliefs to others in a complex
way that he never learns to encapsulate in definitions.
Nevertheless we can give approximate criteria for
some specific properties:
%2Intends%1 - We say that a machine intends to do something if we
can regard it as believing that it will attempt to do it. We may
know something that will deter it from making the attempt.
Like most mental concepts, intention is an intermediate in the causal
chain; an intention may be caused by a variety of stimuli and
predispositions and may result in action or be frustrated in a
variety of ways.
%2Tries%1 - This is important in understanding machines that have
a variety of ways of achieving a goal including possibly ways that
we don't know about. If the machine may do something we don't know
about but that can later be explained in relation to a goal, we have
no choice but to use "is trying" or some synonym to explain the
behavior.
%2Likes%1 - As in "A likes B". This involves wanting B's welfare.
It requires that A be sophisticated enough to have a concept of
B's welfare.
%2Self-consciousness%1 - Self-consciousness is perhaps the most
interesting mental quality to humans. Human self-consciousness
involves at least the following:
1. Facts about the person's body as a physical object. This permits
reasoning from facts about bodies in general to one's own.
It also permits reasoning from facts about one's own body, e.g.
its momentum, to corresponding facts about other physical objects.
2. The ability to observe one's own mental processes and to form
beliefs and wants about them. A person can wish he were smarter
or didn't want a cigarette.
3. Facts about oneself as a being that has beliefs, wants, etc., among
other similar beings.
Some of the above attributes of human self-consciousness
are easy to program. For example, it is not hard to make a program
look at itself, and many AI programs do look at parts of themselves.
Others are more difficult. Also animals cannot be shown to have
more than a few. Therefore, many present and future programs can
best be described as partially self-conscious.
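Even today a program can look at its own text. Here is a sketch in a
modern programming notation; it observes itself without, of course,
having any beliefs about what it sees:

    import inspect

    def look_at_myself():
        # Fetch this function's own source text - a rudimentary form
        # of self-observation, far short of self-consciousness.
        my_text = inspect.getsource(look_at_myself)
        print("I am a function of", len(my_text.splitlines()), "lines.")

    look_at_myself()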